k8s: Creating Applications

Creating an Application

Create an application, specifying the port, image, name, number of Pods, and so on.

[root@k8s001 ~]# kubectl run nginx-deploy --image=nginx:v1.14-alpine --port=80 --replicas=1 --dry-run=true
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deploy created (dry run)

nginx-deploy is the name to create, --image=nginx:v1.14-alpine is the image, --port=80 is the port, --replicas=1 means one Pod, and --dry-run=true performs a dry run (the syntax is checked but nothing is actually created).
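
The same Deployment can also be written declaratively and applied with kubectl apply -f. A minimal sketch (the run=nginx-deploy label mirrors what kubectl run generates; the plain nginx image is used here because the exact tag is an assumption):

```yaml
# deploy.yaml -- declarative equivalent of the kubectl run command above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 1                 # same as --replicas=1
  selector:
    matchLabels:
      run: nginx-deploy
  template:
    metadata:
      labels:
        run: nginx-deploy
    spec:
      containers:
      - name: nginx-deploy
        image: nginx          # --image; tag omitted here (an assumption)
        ports:
        - containerPort: 80   # same as --port=80
```

Apply it with kubectl apply -f deploy.yaml.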

Now run it for real:

kubectl run nginx-deploy --image=nginx:v1.14-alpine --port=80 --replicas=1

Check the pods: one is being created. NAME is the name we specified plus a hash; a STATUS of ContainerCreating means it is still being created; RESTARTS of 0 means it has restarted zero times.

[root@k8s001 ~]# kubectl get pods
NAME                            READY   STATUS              RESTARTS   AGE
nginx-deploy-554c975bb9-gllhb   0/1     ContainerCreating   0          99s

Check the deployments:

[root@k8s001 ~]# kubectl get deployments
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   0/1     1            0           2m38s

After quite a while it still has not finished creating. Looking at the details, the Pod was scheduled onto node 002, but there is no hint of why it never comes up.

[root@k8s001 ~]# kubectl get pods -o wide 
NAME                            READY   STATUS              RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
nginx-deploy-554c975bb9-gllhb   0/1     ContainerCreating   0          9m24s   <none>   k8s002   <none>           <none>

Checking the system pods, flannel has never started successfully and keeps restarting.

[root@k8s001 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS              RESTARTS   AGE
default       nginx-deploy-554c975bb9-gllhb    0/1     ContainerCreating   0          13m
kube-system   coredns-fb8b8dccf-n4kq5          0/1     ContainerCreating   0          34m
kube-system   coredns-fb8b8dccf-sc5fl          0/1     ContainerCreating   0          34m
kube-system   etcd-k8s001                      1/1     Running             0          33m
kube-system   kube-apiserver-k8s001            1/1     Running             0          32m
kube-system   kube-controller-manager-k8s001   1/1     Running             0          33m
kube-system   kube-flannel-ds-amd64-9nrnk      0/1     CrashLoopBackOff    10         28m
kube-system   kube-flannel-ds-amd64-nmp42      0/1     CrashLoopBackOff    10         28m
kube-system   kube-flannel-ds-amd64-rj5pn      0/1     CrashLoopBackOff    10         28m
kube-system   kube-proxy-fbnns                 1/1     Running             0          34m
kube-system   kube-proxy-trnx8                 1/1     Running             0          33m
kube-system   kube-proxy-vsphn                 1/1     Running             0          29m
kube-system   kube-scheduler-k8s001            1/1     Running             0          33m

Look at the logs; the error is:

[root@k8s001 ~]# kubectl -n kube-system logs kube-flannel-ds-amd64-9nrnk
I0504 09:17:58.963906 1 main.go:514] Determining IP address of default interface
I0504 09:17:58.964230 1 main.go:527] Using interface with name eth0 and address 172.20.79.58
I0504 09:17:58.964255 1 main.go:544] Defaulting external address to interface address (172.20.79.58)
I0504 09:17:59.066660 1 kube.go:126] Waiting 10m0s for node controller to sync
I0504 09:17:59.066821 1 kube.go:309] Starting kube subnet manager
I0504 09:18:00.066861 1 kube.go:133] Node controller sync successful
I0504 09:18:00.066903 1 main.go:244] Created subnet manager: Kubernetes Subnet Manager - k8s001
I0504 09:18:00.066916 1 main.go:247] Installing signal handlers
I0504 09:18:00.067101 1 main.go:386] Found network config - Backend type: vxlan
I0504 09:18:00.067173 1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E0504 09:18:00.067410 1 main.go:289] Error registering network: failed to acquire lease: node "k8s001" pod cidr not assigned
I0504 09:18:00.067464 1 main.go:366] Stopping shutdownHandler...

Error registering network: failed to acquire lease: node "k8s001" pod cidr not assigned is probably because no Pod network CIDR was specified at init time.

Starting Over

Rebuilding the Cluster

After repeated attempts, the configuration below works. The 10.244.0.0/16 in --pod-network-cidr 10.244.0.0/16 is the same value that appears in Flannel's ConfigMap, and by all accounts the two must match. The Master and Nodes could never reach each other before precisely because this flag was missing.

kubeadm init --apiserver-advertise-address 172.20.245.180 --pod-network-cidr 10.244.0.0/16 --ignore-preflight-errors=All
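
The same init options can also be recorded in a kubeadm configuration file and passed with kubeadm init --config. A sketch against the v1beta1 config API used by kubeadm v1.14 (the addresses are the ones from this cluster):

```yaml
# kubeadm.yaml -- config-file equivalent of the kubeadm init flags above
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.20.245.180   # --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16           # --pod-network-cidr; must match Flannel's ConfigMap
```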

As before, create two Node machines and join them to the Master.

[root@k8s001 ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
k8s001   Ready    master   15m     v1.14.1
k8s002   Ready    <none>   5m37s   v1.14.1
k8s003   Ready    <none>   16s     v1.14.1

After joining, the machines can all reach one another. Below is Flannel's status:

[root@k8s001 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-7qbq9          1/1     Running   0          15m
kube-system   coredns-fb8b8dccf-9hplg          1/1     Running   0          15m
kube-system   etcd-k8s001                      1/1     Running   0          14m
kube-system   kube-apiserver-k8s001            1/1     Running   0          14m
kube-system   kube-controller-manager-k8s001   1/1     Running   0          14m
kube-system   kube-flannel-ds-amd64-9gnpw      1/1     Running   0          14m
kube-system   kube-flannel-ds-amd64-lwc5w      1/1     Running   0          29s
kube-system   kube-flannel-ds-amd64-wgrct      1/1     Running   0          5m50s
kube-system   kube-proxy-d4v6l                 1/1     Running   0          29s
kube-system   kube-proxy-dnlr4                 1/1     Running   0          5m50s
kube-system   kube-proxy-kvvjs                 1/1     Running   0          15m
kube-system   kube-scheduler-k8s001            1/1     Running   0          14m

Trying It Out

Creating an Application
kubectl run nginx-deploy --image=nginx:v1.14-alpine --port=80 --replicas=1

Image pull error: it turns out the image name is wrong.

[root@k8s001 ~]# kubectl get pods 
NAME                            READY   STATUS         RESTARTS   AGE
nginx-deploy-554c975bb9-bddnw   0/1     ErrImagePull   0          3m33s

[root@k8s001 ~]# kubectl logs nginx-deploy-554c975bb9-bddnw
Error from server (BadRequest): container "nginx-deploy" in pod "nginx-deploy-554c975bb9-bddnw" is waiting to start: trying and failing to pull image

At this point delete alone cannot get rid of the Pod: as soon as one is deleted, another starts, because replicas=1. So the idea is to set the count to 0 first. --replicas=0 sets the count, and nginx-deploy is the name of the Deployment.

[root@k8s001 ~]# kubectl scale --replicas=0 deployment nginx-deploy 
deployment.extensions/nginx-deploy scaled

Then delete the corresponding Deployment:

[root@k8s001 ~]# kubectl delete deployment nginx-deploy
deployment.extensions "nginx-deploy" deleted
[root@k8s001 ~]#

Checking the Pods again afterwards, they are gone. (-o wide shows more detailed information.)

[root@k8s001 ~]# kubectl get pods -o wide 
No resources found.

Recreate the application, this time without an image tag, so the default latest is pulled.

kubectl run nginx-deploy --image=nginx --port=80 --replicas=1

You can see the scheduler assigned it to node 002 and the application is already running. 10.244.1.3 is the Pod's IP, which we can test with curl.

[root@k8s001 ~]# kubectl get pods -o wide 
NAME                            READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deploy-5c9b546997-xnthf   1/1     Running   0          11s   10.244.1.3   k8s002   <none>           <none>

[root@k8s001 ~]# curl 10.244.1.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s001 ~]#

Creating a Service

With the steps above we have a working cluster running an Nginx application with a single Pod, and we verified connectivity by accessing the Pod's IP directly. But Pods have a lifecycle, so we should not depend on a Pod's IP; instead we create a Service and access the Pod through it.

kubectl expose deployment nginx-deploy --name=nginx-service --port=8080 --target-port=80 --protocol=TCP --type=ClusterIP

nginx-deploy is the Deployment name, --name=nginx-service is the name of the Service to create, --port=8080 is the port the Service listens on, --target-port=80 is the application's port, --protocol=TCP sets the protocol to TCP (the default), and --type=ClusterIP makes it a ClusterIP Service, reachable only inside the cluster with no external access (also the default).
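
The same Service can be written as a manifest; a sketch, assuming the run=nginx-deploy label that kubectl run put on the Pod:

```yaml
# nginx-service.yaml -- declarative equivalent of the kubectl expose command above
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP         # --type; cluster-internal only (the default)
  selector:
    run: nginx-deploy     # which Pods the Service routes to
  ports:
  - protocol: TCP         # --protocol (the default)
    port: 8080            # --port: the Service's own port
    targetPort: 80        # --target-port: the container's port
```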

Check the Service:

[root@k8s001 ~]# kubectl get svc 
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP    48m
nginx-service   ClusterIP   10.100.116.103   <none>        8080/TCP   28s

Now access the Service to test connectivity:

[root@k8s001 ~]# curl 10.100.116.103:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8s001 ~]#

You can also view the Service's details:

[root@k8s001 ~]# kubectl get svc 
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP    53m
nginx-service   ClusterIP   10.100.116.103   <none>        8080/TCP   6m8s

[root@k8s001 ~]# kubectl describe svc nginx-service
Name:              nginx-service
Namespace:         default
Labels:            run=nginx-deploy
Annotations:       <none>
Selector:          run=nginx-deploy
Type:              ClusterIP
IP:                10.100.116.103
Port:              <unset>  8080/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.3:80
Session Affinity:  None
Events:            <none>
[root@k8s001 ~]#

A question arises here: since Pods have a lifecycle and are unstable, how does the Service stay associated with them? It cannot be by IP, because the IPs change. After some reading, it turns out the link between Pods and a Service, that is, which Pods belong to which Service, is decided by the Pods' labels. We specified deployment nginx-deploy when creating the Service; let's look at the Pod's label information.

[root@k8s001 ~]# kubectl get pods --show-labels
NAME                            READY   STATUS    RESTARTS   AGE   LABELS
nginx-deploy-5c9b546997-xnthf   1/1     Running   0          26m   pod-template-hash=5c9b546997,run=nginx-deploy

You can also view the Deployment's details:

[root@k8s001 ~]# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   1/1     1            1           27m

[root@k8s001 ~]# kubectl describe deployment nginx-deploy
Name:                   nginx-deploy
Namespace:              default
CreationTimestamp:      Sat, 04 May 2019 22:45:36 +0800
Labels:                 run=nginx-deploy
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               run=nginx-deploy
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx-deploy
  Containers:
   nginx-deploy:
    Image:        nginx
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deploy-5c9b546997 (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  28m   deployment-controller  Scaled up replica set nginx-deploy-5c9b546997 to 1
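The RollingUpdateStrategy shown in the describe output (25% max unavailable, 25% max surge) corresponds to fields in the Deployment spec; a sketch of just that fragment:

```yaml
# fragment of a Deployment spec -- tunes how a rolling update proceeds
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # how many Pods may be down during an update
      maxSurge: 25%         # how many extra Pods may exist above replicas
```
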

Smooth Sailing

Above we created a one-Pod application and a Service associated with it. Next let's try the more advanced parts: deploying an application with multiple Pods, doing a rolling update, and rolling back.

Creating an Application
[root@k8s001 ~]# kubectl run myapp --image=ikubernetes/myapp:v1 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/myapp created
[root@k8s001 ~]# kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
myapp          2/2     2            2           32s
nginx-deploy   1/1     1            1           35m

[root@k8s001 ~]# kubectl get pods -o wide 
NAME                            READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
myapp-5bc569c47d-42s48          1/1     Running   0          103s   10.244.2.4   k8s003   <none>           <none>
myapp-5bc569c47d-dz6lb          1/1     Running   0          103s   10.244.1.4   k8s002   <none>           <none>
nginx-deploy-5c9b546997-xnthf   1/1     Running   0          37m    10.244.1.3   k8s002   <none>           <none>

You can see myapp was scheduled onto 002 and 003 and is running.

Creating a Service

This time, create an externally reachable Service:

[root@k8s001 ~]# kubectl expose deployment myapp --name=myapp --port=8080 --target-port=80 --protocol=TCP --type=NodePort
[root@k8s001 ~]# kubectl get svc -o wide 
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP          91m   <none>
myapp           NodePort    10.111.135.52    <none>        8080:31953/TCP   71s   run=myapp
nginx-service   ClusterIP   10.100.116.103   <none>        8080/TCP         43m   run=nginx-deploy

Once the Service is created it has an internal port and an external (node) port; 10.111.135.52 is the internal Service address. Let's test access:

[root@k8s001 ~]# curl 10.111.135.52:8080
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
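
The equivalent NodePort manifest, as a sketch. nodePort can be pinned explicitly (within the node-port range, 30000-32767 by default) or omitted to be auto-assigned, which is how 31953 was chosen here:

```yaml
# myapp-service.yaml -- declarative equivalent of the kubectl expose command above
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  selector:
    run: myapp
  ports:
  - protocol: TCP
    port: 8080        # ClusterIP port inside the cluster
    targetPort: 80    # container port
    nodePort: 31953   # port opened on every node (auto-assigned if omitted)
```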

Since myapp runs on both node 002 and node 003, we test with their public IPs on port 31953, from my own laptop (outside the cluster). You can see the Service load-balances across the Pods automatically.

yanrunshadeMacBook-Pro:~ rex$ while true ;do curl http://47.254.95.246:31953/hostname.html; done;
myapp-5bc569c47d-42s48
myapp-5bc569c47d-42s48
myapp-5bc569c47d-dz6lb
myapp-5bc569c47d-42s48
myapp-5bc569c47d-42s48
myapp-5bc569c47d-42s48
myapp-5bc569c47d-42s48
myapp-5bc569c47d-42s48
myapp-5bc569c47d-42s48
myapp-5bc569c47d-42s48
myapp-5bc569c47d-dz6lb
myapp-5bc569c47d-42s48
myapp-5bc569c47d-dz6lb
myapp-5bc569c47d-42s48

Scaling

Check the existing Pods:

[root@k8s001 ~]# kubectl get pods 
NAME                            READY   STATUS    RESTARTS   AGE
myapp-5bc569c47d-42s48          1/1     Running   0          35m
myapp-5bc569c47d-dz6lb          1/1     Running   0          35m
nginx-deploy-5c9b546997-xnthf   1/1     Running   0          71m

Set the Pod count to 5:

[root@k8s001 ~]# kubectl scale --replicas=5 deployment myapp

Check the Pods now:

[root@k8s001 ~]# kubectl get pods 
NAME                            READY   STATUS    RESTARTS   AGE
myapp-5bc569c47d-42s48          1/1     Running   0          37m
myapp-5bc569c47d-6nvgp          1/1     Running   0          13s
myapp-5bc569c47d-ccxs7          1/1     Running   0          13s
myapp-5bc569c47d-dz6lb          1/1     Running   0          37m
myapp-5bc569c47d-ml8b9          1/1     Running   0          13s
nginx-deploy-5c9b546997-xnthf   1/1     Running   0          72m

Test again: the Pods that respond have changed, and the scaling happened with no service interruption.

yanrunshadeMacBook-Pro:~ rex$ while true ;do curl http://47.254.95.246:31953/hostname.html; done;
myapp-5bc569c47d-6nvgp
myapp-5bc569c47d-ml8b9
myapp-5bc569c47d-42s48
myapp-5bc569c47d-ccxs7
myapp-5bc569c47d-42s48
myapp-5bc569c47d-ml8b9
myapp-5bc569c47d-ml8b9
myapp-5bc569c47d-ml8b9

The scaled application still serves API version v1; next, let's do a rolling upgrade.

yanrunshadeMacBook-Pro:~ rex$ while true ;do curl http://47.254.95.246:31953; done;
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Rolling Upgrade

Upgrade the application's image to version v2. deployment myapp names the Deployment to update; in myapp=ikubernetes/myapp:v2, the first myapp is the name of the container inside the Pod. Note that this is the container name, which you can find in the Pod's details (kubectl describe pods followed by the Pod name). ikubernetes/myapp:v2 is the image to upgrade to.

[root@k8s001 ~]# kubectl set image deployment myapp myapp=ikubernetes/myapp:v2

Check the rollout status:

[root@k8s001 ~]# kubectl rollout status deployment myapp
deployment "myapp" successfully rolled out

Check what the API returns: it is now the v2 version, so the upgrade succeeded.

Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

Rolling Back
[root@k8s001 ~]# kubectl rollout undo deployment myapp

The responses go from v2 back to v1; the rollback succeeded!

Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

More examples for creating applications:

Examples:
# Start a single instance of nginx.
kubectl run nginx --image=nginx

# Start a single instance of hazelcast and let the container expose port 5701 .
kubectl run hazelcast --image=hazelcast --port=5701

# Start a single instance of hazelcast and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container.
kubectl run hazelcast --image=hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default"

# Start a single instance of hazelcast and set labels "app=hazelcast" and "env=prod" in the container.
kubectl run hazelcast --image=hazelcast --labels="app=hazelcast,env=prod"

# Start a replicated instance of nginx.
kubectl run nginx --image=nginx --replicas=5

# Dry run. Print the corresponding API objects without creating them.
kubectl run nginx --image=nginx --dry-run

# Start a single instance of nginx, but overload the spec of the deployment with a partial set of values parsed from JSON.
kubectl run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }'

# Start a pod of busybox and keep it in the foreground, don't restart it if it exits.
kubectl run -i -t busybox --image=busybox --restart=Never

# Start the nginx container using the default command, but use custom arguments (arg1 .. argN) for that command.
kubectl run nginx --image=nginx -- <arg1> <arg2> ... <argN>

# Start the nginx container using a different command and custom arguments.
kubectl run nginx --image=nginx --command -- <cmd> <arg1> ... <argN>

# Start the perl container to compute π to 2000 places and print it out.
kubectl run pi --image=perl --restart=OnFailure -- perl -Mbignum=bpi -wle 'print bpi(2000)'

# Start the cron job to compute π to 2000 places and print it out every 5 minutes.
kubectl run pi --schedule="0/5 * * * ?" --image=perl --restart=OnFailure -- perl -Mbignum=bpi -wle 'print bpi(2000)'